

Search for: All records

Creators/Authors contains: "Herman, G."


  1. The use of structured roles to facilitate cooperative learning is an evidence-based practice shown to improve student performance, attitude, and persistence. The combination of structured roles and activities also helps build students’ process skills, including communication and metacognition. While these benefits have been demonstrated in a variety of disciplines, most prior work has focused on in-person, synchronous settings, and few studies have examined online, synchronous settings. With the ongoing COVID-19 pandemic, we need a better understanding of how cooperative learning takes place online and what differences may exist between online and in-person modalities. This work-in-progress documents our development of an observation protocol to help us answer research questions such as the following: Do group members participate equally? Do group members’ contributions match their roles? How do groups connect and bond with each other? How do groups seek help?
  2. Collaborative learning can improve student learning, student persistence, and the classroom climate. While prior work has documented the trade-offs between face-to-face collaboration and asynchronous, online learning, the trade-offs between asynchronous (student-scheduled) and synchronous (instructor-scheduled) collaborative online learning have not been explored. Structured roles can maximize the effectiveness of collaborative learning by helping all students participate, but structured roles have not been studied in online settings. We performed a quasi-experimental study in two courses, Computer Architecture and Numerical Methods, to compare the effects of asynchronous collaborative learning without structured roles to synchronous collaborative learning with structured roles. We used a data-analytics approach to examine how these approaches affected the student learning experience during formative collaborative learning assessments. Teams in the synchronous offering made higher-scoring submissions (5-10 percentage points better on average), finished assessments more efficiently (11-16 minutes faster on average), and had greater equality in the total number of submissions each student made (for example, a significant 13% increase in the mean equality score across all groups).
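The abstract above reports a "mean equality score" without defining it. The following is a minimal sketch of one plausible way such a metric could be computed from per-student submission counts; the choice of one minus the Gini coefficient, and all function names, are illustrative assumptions rather than the authors' published definition.

```python
# Sketch: quantifying how evenly submissions are distributed within a team.
# Assumption: equality is scored as 1 - Gini(per-student submission counts),
# so 1.0 means every member submitted equally often and values near 0 mean
# one member dominated. Illustrative only, not the paper's exact metric.

def gini(counts: list[int]) -> float:
    """Gini coefficient of a list of non-negative counts."""
    n = len(counts)
    total = sum(counts)
    if n == 0 or total == 0:
        return 0.0
    sorted_counts = sorted(counts)
    # Standard form: G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(sorted_counts))
    return (2 * weighted) / (n * total) - (n + 1) / n

def equality_score(submissions_per_student: list[int]) -> float:
    """1.0 = perfectly equal participation; near 0 = one student did everything."""
    return 1.0 - gini(submissions_per_student)

# Example: a balanced team vs. a team carried by a single student.
print(equality_score([5, 5, 5, 5]))   # 1.0
print(equality_score([18, 1, 1, 0]))  # about 0.33
```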
  3. We present a psychometric evaluation of a revised version of the Cybersecurity Concept Inventory (CCI), completed by 354 students from 29 colleges and universities. The CCI is a conceptual test of understanding created to enable research on instruction quality in cybersecurity education. This work extends previous expert review and small-scale pilot testing of the CCI. Results show that the CCI aligns with a curriculum many instructors expect from an introductory cybersecurity course, and that it is a valid and reliable tool for assessing what conceptual cybersecurity knowledge students learned. 
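The reliability claim in the entry above rests on standard psychometric statistics. As a hedged illustration only, and not the CCI project's actual analysis pipeline, internal consistency of a scored item-response matrix is commonly summarized with Cronbach's alpha:

```python
# Illustrative only: Cronbach's alpha for a scored item-response matrix,
# rows = students, columns = test items (1 = correct, 0 = incorrect).
# A common internal-consistency statistic, not the CCI team's published code.
import statistics

def cronbach_alpha(scores: list[list[int]]) -> float:
    k = len(scores[0])  # number of items
    item_vars = [statistics.pvariance([row[j] for row in scores]) for j in range(k)]
    total_var = statistics.pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 students answering 3 items.
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # 0.75 for this toy matrix
```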
  4. We present and analyze results from a pilot study that explores how crowdsourcing can be used in the process of generating distractors (incorrect answer choices) in multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each of these groups. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback from Amazon Mechanical Turk on the resulting new draft test items (including distractors) from additional subjects. Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool in generating effective distractors (attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method of having one or more experts draft distractors, building on talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
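The pipeline described in the preceding entry (filter responses, group similar ones, keep the four most frequent groups as candidate distractors) can be sketched in code. The grouping rule below, normalizing text and counting exact matches, is an assumed stand-in for the authors' similarity grouping, and the function names and toy data are hypothetical:

```python
# Sketch of the distractor-drafting pipeline: filter free-text responses,
# group near-identical ones, and keep the four most frequent groups as seeds
# for refined distractors. Grouping by normalized exact match is illustrative.
from collections import Counter

def draft_distractors(responses: list[str], correct_answer: str, k: int = 4) -> list[str]:
    def normalize(text: str) -> str:
        return " ".join(text.lower().split())

    correct = normalize(correct_answer)
    # Filter: drop low-effort (single-word) responses and matches of the correct answer.
    kept = [normalize(r) for r in responses if len(r.split()) >= 2]
    kept = [r for r in kept if r != correct]

    # Group identical normalized responses; the k most frequent groups become
    # the seeds that would then be refined into polished distractors.
    groups = Counter(kept)
    return [text for text, _count in groups.most_common(k)]

# Toy example with hypothetical crowdsourced responses to one question stem.
crowd = [
    "Encrypt the file", "encrypt the  file", "Delete the log", "delete the log",
    "Change the password", "idk", "Block the port", "block the port",
]
print(draft_distractors(crowd, correct_answer="Revoke the certificate"))
```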
  5. For two days in February 2018, 17 cybersecurity educators and professionals from government and industry met in a “hackathon” to refine existing draft multiple-choice test items, and to create new ones, for a Cybersecurity Concept Inventory (CCI) and Cybersecurity Curriculum Assessment (CCA) being developed as part of the Cybersecurity Assessment Tools (CATS) Project. We report on the results of the CATS Hackathon, discussing the methods we used to develop test items, highlighting the evolution of a sample test item through this process, and offering suggestions to others who may wish to organize similar hackathons. Each test item embodies a scenario, question stem, and five answer choices. During the Hackathon, participants organized into teams to (1) generate new scenarios and question stems, (2) extend CCI items into CCA items and generate new answer choices for new scenarios and stems, and (3) review and refine draft CCA test items. The CATS Project provides rigorous evidence-based instruments for assessing and evaluating educational practices; these instruments can help identify pedagogies and content that are effective in teaching cybersecurity. The CCI measures how well students understand basic concepts in cybersecurity, especially adversarial thinking, after a first course in the field. The CCA measures how well students understand core concepts after completing a full cybersecurity curriculum.